OALib Journal

ISSN: 2333-9721

Matched: "Philippe Ravaud"; approximately 9,741 results found.
All articles listed are available free of charge.
Preventing Bias in Cluster Randomised Trials
Bruno Giraudeau, Philippe Ravaud
PLOS Medicine, 2009, DOI: 10.1371/journal.pmed.1000065
Blockchain technology for improving clinical research quality
Mehdi Benchoufi, Philippe Ravaud
Trials, 2017, DOI: 10.1186/s13063-017-2035-z
Outcomes in Registered, Ongoing Randomized Controlled Trials of Patient Education
Cécile Pino, Isabelle Boutron, Philippe Ravaud
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0042934
Abstract: With the increasing prevalence of chronic noncommunicable diseases, patient education is becoming important to strengthen disease prevention and control. We aimed to systematically determine the extent to which registered, ongoing randomized controlled trials (RCTs) evaluating an educational intervention focus on patient-important outcomes (i.e., outcomes measuring patient health status and quality of life).
Impact of Reporting Bias in Network Meta-Analysis of Antidepressant Placebo-Controlled Trials
Ludovic Trinquart, Adeline Abbé, Philippe Ravaud
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0035219
Abstract: Background Indirect comparisons of competing treatments by network meta-analysis (NMA) are increasingly in use, but reporting bias has received little attention in this context. We aimed to assess the impact of such bias on NMAs. Methods We used data from 74 FDA-registered placebo-controlled trials of 12 antidepressants and their 51 matching publications. For each dataset, NMA was used to estimate the effect sizes for the 66 possible pair-wise comparisons of these drugs, the probabilities of being the best drug, and the ranking of the drugs. To assess the impact of reporting bias, we compared the NMA results for the 51 published trials with those for the 74 FDA-registered trials. To assess how reporting bias affecting only one drug may affect the ranking of all drugs, we performed 12 hypothetical NMAs; each used published data for one drug and FDA data for the 11 other drugs. Findings Pair-wise effect sizes derived from the NMA of published data and those from the NMA of FDA data differed in absolute value by at least 100% in 30 of the 66 pair-wise comparisons (45%). Depending on the dataset used, the top 3 agents differed in composition and order. When reporting bias hypothetically affected only one drug, the affected drug ranked first in 5 of the 12 NMAs but second (n = 2), fourth (n = 1) or eighth (n = 2) in the NMA of the complete FDA network. Conclusions In this particular network, reporting bias distorted NMA-based estimates of treatment efficacy and modified drug rankings. The effect of reporting bias in NMAs may differ from that in classical meta-analyses in that bias affecting only one drug can affect the ranking of all drugs.
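For readers unfamiliar with the mechanics, the minimal sketch below shows why bias in one drug's trials can propagate through a network: in a star-shaped network of placebo-controlled trials, every between-drug contrast is derived from the drug-vs-placebo estimates. The effect sizes here are invented for illustration, not the FDA or published values.

```python
# Minimal sketch (not the authors' model): indirect comparisons in a
# star-shaped network where every drug is compared against placebo.
import itertools

# Hypothetical drug-vs-placebo effect sizes (illustrative only).
effects_vs_placebo = {"drug_A": 0.35, "drug_B": 0.26, "drug_C": 0.31}

# In a star network, the indirect estimate of A vs B is the difference
# of the two direct drug-vs-placebo estimates: d_AB = d_A - d_B.
for a, b in itertools.combinations(effects_vs_placebo, 2):
    d_ab = effects_vs_placebo[a] - effects_vs_placebo[b]
    print(f"{a} vs {b}: {d_ab:+.2f}")

# Reporting bias that inflates one drug's estimate propagates into every
# pairwise contrast involving that drug, which is why biased data for a
# single drug can reorder the whole ranking.
```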
A priori postulated and real power in cluster randomized trials: mind the gap
Lydia Guittet, Bruno Giraudeau, Philippe Ravaud
BMC Medical Research Methodology, 2005, DOI: 10.1186/1471-2288-5-25
Abstract: Power contour graphs were drawn to illustrate the loss in power induced by an underestimation of the ICC when planning trials. We also derived the maximum achievable power given a specified ICC. The magnitude of the ICC can have a major impact on power, and with low numbers of clusters, 80% power may not be achievable. Underestimating the ICC when planning a cluster randomized trial can lead to a seriously underpowered trial. Publication of a priori postulated and a posteriori estimated ICCs is necessary for a more objective reading: negative trial results may be the consequence of a loss of power due to a mis-specification of the ICC. A cluster randomized trial involves randomizing social units or clusters of individuals, rather than the individuals themselves. This design, which is increasingly used for evaluating health-care, screening and educational interventions [1-3], presents specific constraints that must be considered during planning and analysis [4,5]. The responses of individuals within a cluster tend to be more similar than those of individuals from different clusters. This correlation leads to an increased required sample size in cluster randomized trials compared with individually randomized trials, although this clustering effect is rarely taken into account. Thus, in a recent review of cluster randomized trials in primary care, Eldridge et al. [6] reported that only 20% of studies accounted for clustering in the sample size calculation. Similar results were found in other reviews, as listed by Bland [7]. The increase in sample size is measured through an inflation factor, which is a function of both the cluster size and the intraclass correlation coefficient (ICC), which appraises the correlation between individuals within the same cluster [1-3,8]. Therefore an a priori value for this correlation must be postulated during planning. However, estimates of this correlation are rarely available and, if available, are often uncertain. Indeed the correlation would di…
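The sketch below illustrates the power loss the abstract describes, using the standard design effect 1 + (m - 1)ρ and a normal-approximation power formula; the sample sizes, ICC values and effect size are hypothetical, not taken from the paper.

```python
# A minimal sketch of the planning arithmetic: the design effect
# 1 + (m - 1) * rho is the standard inflation factor for cluster
# randomized trials; the trial numbers below are made up.
from math import sqrt
from statistics import NormalDist

def power_cluster_trial(n_per_arm, m, rho, delta, alpha=0.05):
    """Approximate power of a two-arm cluster randomized trial for a
    standardized effect size delta, cluster size m, and ICC rho."""
    deff = 1 + (m - 1) * rho          # design (inflation) effect
    n_eff = n_per_arm / deff          # effective sample size per arm
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(delta * sqrt(n_eff / 2) - z_alpha)

# Planning with an underestimated ICC of 0.01 suggests ~90% power...
print(power_cluster_trial(n_per_arm=400, m=20, rho=0.01, delta=0.25))
# ...but if the true ICC is 0.05, the real power is only ~72%.
print(power_cluster_trial(n_per_arm=400, m=20, rho=0.05, delta=0.25))
```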
Planning a cluster randomized trial with unequal cluster sizes: practical issues involving continuous outcomes
Lydia Guittet, Philippe Ravaud, Bruno Giraudeau
BMC Medical Research Methodology, 2006, DOI: 10.1186/1471-2288-6-17
Abstract: We performed simulations to study the impact of an imbalance in cluster size on power. We also determined by simulation the extent to which four methods proposed to adapt the sample size calculation to a pre-specified imbalance in cluster size could lead to adequately powered trials. We showed that an imbalance in cluster size can have a strong influence on power in the case of severe imbalance, particularly if the number of clusters is low and/or the intraclass correlation coefficient is high. In the case of a severe imbalance, our simulations confirmed that the minimum variance weights correction of the variance inflation factor (VIF) used in the sample size calculations has the best properties. Publication of cluster sizes is important to assess the real power of the trial that was conducted and to help design future trials. We derived an adaptation of the VIF from the minimum variance weights correction, to be used when the imbalance can be formulated a priori as "a proportion (γ) of clusters actually recruits a proportion (τ) of the subjects to be included (γ ≤ τ)". A cluster randomized trial involves randomizing social units or clusters of individuals rather than the individuals themselves. This design, which is increasingly being used for evaluating healthcare, screening and educational interventions, presents specific constraints that must be considered during planning and analysis [1,2]. Indeed, the responses of individuals within a cluster tend to be more similar than those of individuals from different clusters, and we thus define the clustering effect as 1 + (m - 1)ρ, where m is the average number of subjects per cluster and ρ the intraclass correlation coefficient (ICC). This clustering effect is used during the planning of cluster randomized trials as an inflation factor to increase the sample size required for an individually randomized trial. However, such an approach does not take into account variations in cluster size, which might differ greatly. Inde…
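As a rough illustration of how imbalance inflates the design effect, the sketch below uses a common coefficient-of-variation correction for unequal cluster sizes; note that this simple formula is not the minimum variance weights correction the authors recommend, and the cluster sizes are invented.

```python
# Sketch of a design-effect correction for unequal cluster sizes based
# on the coefficient of variation of cluster size (one common
# adjustment, which only approximates the authors' minimum variance
# weights correction). Cluster sizes below are illustrative.
import statistics

def design_effect_unequal(cluster_sizes, rho):
    m_bar = statistics.mean(cluster_sizes)
    cv = statistics.pstdev(cluster_sizes) / m_bar   # coefficient of variation
    return 1 + ((cv**2 + 1) * m_bar - 1) * rho

sizes_equal = [20] * 10
sizes_skewed = [5, 5, 5, 5, 5, 5, 5, 5, 80, 80]     # same total, severe imbalance
for sizes in (sizes_equal, sizes_skewed):
    print(design_effect_unequal(sizes, rho=0.05))   # 1.95 vs 4.2
```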
Inadequate description of educational interventions in ongoing randomized controlled trials
Cécile Pino, Isabelle Boutron, Philippe Ravaud
Trials, 2012, DOI: 10.1186/1745-6215-13-63
Abstract: On 6 May 2009, we searched for all ongoing RCTs registered in the 10 trial registries accessible through the World Health Organization International Clinical Trials Registry Platform. We included trials evaluating an educational intervention (that is, one designed to teach or train patients about their own health) and dedicated to participants, their family members or home caregivers. We used a standardized data extraction form to collect data related to the description of the experimental intervention, the centers, and the caregivers. We selected 268 of 642 potentially eligible studies and appraised a random sample of 150 records. All selected trials were registered in 4 registries, mainly ClinicalTrials.gov (61%). The median [interquartile range] target sample size was 205 [100 to 400] patients. The comparator was mainly usual care (47%) or active treatment (47%). A minority of records (17%, 95% CI 11 to 23%) reported an overall adequate description of the intervention (that is, a description reporting the content, mode of delivery, number, frequency, and duration of sessions, and the overall duration of the intervention). Further, for most reports (59%), important information about the content of the intervention was missing. The mode of delivery of the intervention was reported for 52% of studies, the number of sessions for 74%, the frequency of sessions for 58%, the duration of each session for 45%, and the overall duration for 63%. Information about the caregivers was missing for 70% of trials. Most trials (73%) took place in the United States or United Kingdom, 64% involved only one centre, and participating centers were mainly tertiary-care, academic or university hospitals (51%). Educational interventions assessed in ongoing RCTs are thus poorly described in trial registries. The lack of adequate description raises doubts about the ability of trial registration to help patients and researchers know about the treatment evaluated i…
Adjustment for reporting bias in network meta-analysis of antidepressant trials
Ludovic Trinquart, Gilles Chatellier, Philippe Ravaud
BMC Medical Research Methodology, 2012, DOI: 10.1186/1471-2288-12-150
Abstract: Background Network meta-analysis (NMA), a generalization of conventional meta-analysis (MA), allows for assessing the relative effectiveness of multiple interventions. Reporting bias is a major threat to the validity of MA and NMA. Numerous methods are available to assess the robustness of MA results to reporting bias; we aimed to extend such methods to NMA. Methods We introduced 2 adjustment models for Bayesian NMA. First, we extended a meta-regression model that allows the effect size to depend on its standard error. Second, we used a selection model that estimates the propensity of trial results being published, and in which trials with lower propensity are weighted up in the NMA model. Both models rely on the assumption that biases are exchangeable across the network. We applied the models to 2 networks of placebo-controlled trials of 12 antidepressants, with 74 trials in the US Food and Drug Administration (FDA) database but only 51 with published results. NMA and adjustment models were used to estimate the effects of the 12 drugs relative to placebo, the 66 effect sizes for all possible pair-wise comparisons between drugs, the probabilities of being the best drug, and the ranking of drugs. We compared the results from the 2 adjustment models applied to published data with those from NMAs of published data and NMAs of FDA data, the latter considered as representing the totality of the data. Results Both adjustment models showed reduced estimated effects for the 12 drugs relative to placebo as compared with the NMA of published data. Pair-wise effect sizes between drugs, probabilities of being the best drug and ranking of drugs were modified. Estimated drug effects relative to placebo from both adjustment models were corrected (i.e., similar to those from the NMA of FDA data) for some drugs but not others, which resulted in differences in pair-wise effect sizes between drugs and in ranking. Conclusions In this case study, adjustment models showed that the NMA of published data was not robust to reporting bias and provided estimates closer to those of the NMA of FDA data, although not optimal. The validity of such methods depends on the number of trials in the network and on the assumption that conventional MAs in the network share a common mean bias mechanism.
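The first adjustment model lends itself to a simple frequentist illustration: regress each trial's effect size on its standard error and read a bias-adjusted effect off the intercept (an Egger-type regression). The sketch below is only an analogy to the Bayesian model described in the abstract, and the data are simulated rather than the antidepressant trials.

```python
# Illustrative sketch: a meta-regression in which each trial's effect
# size depends on its standard error. The intercept estimates the
# effect a hypothetical infinitely precise trial (se -> 0) would show.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 50
se = rng.uniform(0.05, 0.4, n_trials)                  # trial standard errors
true_effect, bias_slope = 0.2, 0.8                     # simulated reporting bias
y = true_effect + bias_slope * se + rng.normal(0, se)  # observed effect sizes

# Inverse-variance weighted least squares of y on se.
w = 1 / se**2
X = np.column_stack([np.ones(n_trials), se])
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print("naive pooled effect:   ", np.average(y, weights=w))
print("bias-adjusted intercept:", beta[0])
```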
Use of Trial Register Information during the Peer Review Process
Sylvain Mathieu, An-Wen Chan, Philippe Ravaud
PLOS ONE, 2013, DOI: 10.1371/journal.pone.0059910
Abstract: Introduction Evidence in the medical literature suggests that trial registration may not be preventing selective reporting of results. We wondered about the place of such information in the peer-review process. Method We asked 1,503 corresponding authors of clinical trials and 1,733 reviewers to complete an online survey soliciting their views on the use of trial registry information during the peer-review process. Results 1,136 authors (n = 713) and reviewers (n = 423) responded (37.5%); 676 (59.5%) had reviewed an article reporting a clinical trial in the past 2 years. Among these, 232 (34.3%) examined information registered on a trial registry. If one or more items (primary outcome, eligibility criteria, etc.) differed between the registry record and the manuscript, 206 (88.8%) mentioned the discrepancy in their review comments, 46 (19.8%) advised editors not to accept the manuscript, and 8 did nothing. The reviewers' reasons for not using the trial registry information included a lack of registration number in the manuscript (n = 132; 34.2%), lack of time (n = 128; 33.2%), lack of usefulness of registered information for peer review (n = 100; 25.9%), lack of awareness about registries (n = 54; 14%), and excessive complexity of the process (n = 39; 10.1%). Conclusion This survey revealed that only one-third of the peer reviewers surveyed examined registered trial information and reported any discrepancies to journal editors.
Patient-important outcomes in systematic reviews: Poor quality of evidence
Agnes Dechartres, Philippe Ravaud, Youri Yordanov
PLOS ONE, 2018, DOI: 10.1371/journal.pone.0195460
Copyright © 2008-2020 Open Access Library. All rights reserved.